3 research outputs found

    An enhanced particle swarm optimization method integrated with evolutionary game theory

    This paper describes a novel particle swarm optimization algorithm. The focus of the study is improving the performance of classical particle swarm optimization: enhancing its convergence speed and its capacity to solve complex problems while reducing the computational load. The proposed approach improves particle swarm optimization using evolutionary game theory. The method preserves the particle swarm optimizer's ability to diversify the particles' exploration of the solution space. Moreover, it adds an important capability to the algorithm: adaptation of the search direction, which improves the quality of the particles based on their experience. The proposed algorithm is tested on a representative set of continuous benchmark optimization problems and compared with several classical optimization approaches. For each benchmark problem, its performance is analyzed and discussed based on the test results.
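The classical particle swarm update that this paper builds on can be sketched minimally as follows. The sphere objective, coefficient values, bounds, and swarm size here are illustrative assumptions, not the paper's settings, and the sketch omits the evolutionary-game-theory enhancement the paper proposes.

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200, seed=0):
    """Minimal classical PSO minimizing the sphere function f(x) = sum(x_i^2).

    Illustrative hyperparameters: inertia w and the cognitive/social
    coefficients c1, c2 are common textbook defaults, not values from the paper.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    f = lambda x: sum(v * v for v in x)

    # Random initial positions, zero initial velocities.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    gbest = min(pbest, key=f)[:]         # swarm's best-known position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

best, val = pso_sphere()
```

On a smooth unimodal objective like the sphere function this baseline converges quickly; the paper's contribution targets exactly the cases where it does not, i.e. faster convergence and better search-direction adaptation on complex problems.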

    Essaim hétérogène pour le combat collaboratif reposant sur de l’apprentissage par renforcement multi-agent (Heterogeneous swarm for collaborative combat based on multi-agent reinforcement learning).

    No full text
    Development of future weapon systems relies heavily on simulations in which several agents interact in an environment. In such an environment, optimised decisions are crucial to leverage the capability of collaborative systems in a congested, cluttered, contested, connected, constrained [1] battlespace. In this work we present a simplified, yet complex, scenario derived from a notional air-to-surface mission featuring a set of heterogeneous effectors collaborating over a communication network in order to detect and engage a number of defended, relocatable ground targets. We propose a method based on a multi-agent, distributed version of SAC (Soft Actor-Critic) and highlight three benefits of our method: (1) Reinforcement Learning (RL) outperforms greedy and semi-random algorithms on a set of operational metrics; (2) collaboration improves the global performance of the teamed agents; (3) Curriculum Learning is central to improving and speeding up the training of the agents.
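The curriculum-learning idea in point (3) can be illustrated with a toy staged-difficulty schedule: training starts on an easy version of the task and switches to harder versions as episodes accumulate. The episode thresholds, difficulty levels, and function name below are hypothetical, not taken from the paper.

```python
def curriculum_level(episode, stages=((0, 1), (100, 2), (300, 3))):
    """Return the difficulty level for a given training episode.

    `stages` is a sequence of (start_episode, level) pairs in increasing
    order of start_episode (hypothetical values). The hardest stage whose
    start threshold has been reached is selected, so agents train on easy
    scenarios first and graduate to harder ones.
    """
    level = stages[0][1]
    for start, lvl in stages:
        if episode >= start:
            level = lvl
    return level

# A training loop would then build each episode's scenario at the
# scheduled difficulty, e.g.:
#   difficulty = curriculum_level(episode)
#   env = make_scenario(difficulty)   # hypothetical scenario factory
```

The design choice is that difficulty is a pure function of training progress, so the schedule is reproducible and easy to tune independently of the RL algorithm itself.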